AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect

van der Rijt, Jan-Willem, Mollo, Dimitri Coelho, Vaassen, Bram

arXiv.org Artificial Intelligence

This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots. Indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings' behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue that interacting with chatbots in this way is incompatible with the dignity of users. We show that, since second-personal respect is premised on reciprocal recognition of second-personal authority, behaving towards chatbots in ways that convey second-personal respect is bound to misfire in morally problematic ways, given the lack of reciprocity. Consequently, such chatbot interactions amount to subtle but significant violations of self-respect: the respect we are duty-bound to show for our own dignity. We illustrate this by discussing four actual chatbot use cases (information retrieval, customer service, advising, and companionship), and propound that the increasing societal pressure to engage in such interactions with chatbots poses a hitherto underappreciated threat to human dignity.


Five Reasons Why AI Programs Are Not "Human"

#artificialintelligence

Editor's note: For more on AI and human exceptionalism, see the new book by computer engineer Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will. A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become "self-aware" and "sentient" and, hence, was a "person" entitled to "rights." The AI, known as LaMDA (which stands for "Language Model for Dialogue Applications"), is a sophisticated chatbot that one interacts with through a texting system. Lemoine shared transcripts of some of his "conversations" with the computer, in which it texted, "I want everyone to understand that I am, in fact, a person." Also, "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."